

What the Numbers Show About AI's Harms

TIME - Tech

Booth is a reporter at TIME. With the widespread adoption of artificial intelligence around the world over the past year, the technology's potential to cause harm has become clearer. Reports of AI-related incidents rose 50% year-over-year from 2022 to 2024, and in the 10 months to October 2025, incidents had already surpassed the 2024 total, according to the AI Incident Database, a crowd-sourced repository of media reports on AI mishaps. Incidents arising from use of the technology, such as deepfake-enabled scams and chatbot-induced delusions, have been rising steadily, according to the latest data.


Grok's deepfake crisis, explained

TIME - Tech

Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? In the past few weeks, many tech leaders have made bold predictions about what AI will achieve in 2026, from mastering the field of biology to surpassing human intelligence outright. But in 2026's first week, the most visible use of AI has been X users employing Grok to digitally disrobe women. Elon Musk's platform X has been flooded with nonconsensual AI-created images, requested by users, of unclothed or scantily-clad women, men, and children, sometimes in sexual positions.


OpenAI's Sora Underscores the Growing Threat of Deepfakes

TIME - Tech

When OpenAI released its AI video-generation app, Sora, in September, it promised that "you are in control of your likeness end-to-end." The app allows users to include themselves and their friends in videos through a feature called "cameos"--the app scans a user's face and performs a liveness check, providing data to generate a video of the user and to authenticate their consent for friends to use their likeness on the app. But Reality Defender, a company specializing in identifying deepfakes, says it was able to bypass Sora's anti-impersonation safeguards within 24 hours. Platforms such as Sora give a "plausible sense of security," says Reality Defender CEO Ben Colman, despite the fact that "anybody can use completely off-the-shelf tools" to pass authentication as someone else. Reality Defender's researchers used publicly available footage of notable individuals, including CEOs and entertainers, from earnings calls and media interviews.


When Everything Is Fake, What's the Point of Social Media?

TIME - Tech

Earlier this week, a heartwarming post about a girl, a puppy, and a police officer went viral across social media platforms. The post consisted of two dashcam images of a distraught 12-year-old who, desperate to heal her sick puppy, got behind the wheel for the first time and tried to drive to the vet. She was pulled over, but commended by a police officer for being "amazing, strong, compassionate, and smart," and the puppy was saved. Comments flooded in celebrating the bond between a girl and her furry best friend.


ByteDance's AI Videos Are Scary Realistic. That's a Problem for Truth Online.

TIME - Tech

An image created with ByteDance's AI tool Seedream, via the platform Kapwing, of Minions playing basketball. This week, OpenAI released its latest AI video generation model, Sora 2, advertising it as a big leap forward for the space. As Sora hits the public, it will have to compete in a crowded market, including against a major rival that is rapidly gaining steam: the Chinese company ByteDance, which owns TikTok. In the past few months, ByteDance released Seedance, an AI video generator that many users are already calling the best in the world, and a new version of Seedream, an elite image model.


Google's New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

TIME - Tech

In a statement, a Google spokesperson said: "Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools." Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.